    Fundamental Limits of Coded Caching: Improved Delivery Rate-Cache Capacity Trade-off

    A centralized coded caching system, consisting of a server delivering N popular files, each of size F bits, to K users through an error-free shared link, is considered. Each user is assumed to be equipped with a local cache memory of capacity MF bits, and contents can be proactively cached into these caches over a low-traffic period, albeit without knowledge of the user demands. During the peak-traffic period each user requests a single file from the server. The goal is to minimize the number of bits delivered by the server over the shared link, known as the delivery rate, over all user demand combinations. A novel coded caching scheme for the cache capacity M = (N-1)/K is proposed. It is shown that the proposed scheme achieves a smaller delivery rate than the existing coded caching schemes in the literature when K > N >= 3. Furthermore, it is argued that the delivery rate of the proposed scheme is within a constant multiplicative factor of 2 of the optimal delivery rate for cache capacities 1/K <= M <= (N-1)/K when K > N >= 3. (To appear in IEEE Transactions on Communications.)
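
    The abstract does not give the proposed scheme's rate expression. As a point of reference only, the classical centralized scheme of Maddah-Ali and Niesen, whose trade-off schemes of this kind improve upon, achieves the delivery rate

        R_{\mathrm{MN}}(M) = K\left(1 - \frac{M}{N}\right) \cdot \frac{1}{1 + KM/N}, \qquad M = t\,\frac{N}{K}, \; t \in \{0, 1, \dots, K\},

    with intermediate cache capacities obtained by memory-sharing between neighboring points.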

    Coded Caching for a Large Number of Users

    Information-theoretic analysis of a coded caching system is considered, in which a server with a database of N equal-size files, each F bits long, serves K users. Each user is assumed to have a local cache that can store M files, i.e., a capacity of MF bits. Proactive caching to user terminals is considered, in which the caches are filled by the server in advance during the placement phase, without knowing the user requests. Each user requests a single file, and all the requests are satisfied simultaneously through a shared error-free link during the delivery phase. First, centralized coded caching is studied assuming that both the number and the identity of the active users in the delivery phase are known by the server during the placement phase. A novel group-based centralized coded caching (GBC) scheme is proposed for a cache capacity of M = N/K. It is shown that this scheme achieves a smaller delivery rate than all the known schemes in the literature. The improvement is then extended to a wider range of cache capacities through memory-sharing between the proposed scheme and other known schemes in the literature. Next, the proposed centralized coded caching idea is exploited in the decentralized setting, in which the identities of the users that participate in the delivery phase are assumed to be unknown during the placement phase. It is shown that the proposed decentralized caching scheme also achieves a delivery rate smaller than the state-of-the-art. Numerical simulations are also presented to corroborate our theoretical results.
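
    The memory-sharing step mentioned above combines two achievable (cache capacity, delivery rate) points by splitting every file and every cache in a fixed proportion, which makes any convex combination of the two points achievable as well. A minimal sketch of that interpolation (the numerical points below are placeholders, not the GBC scheme's values):

        # Memory-sharing in coded caching: if (M1, R1) and (M2, R2) are achievable
        # delivery-rate points, splitting each file and each cache with ratio alpha
        # achieves their convex combination, so the best known trade-off is the
        # lower convex envelope of all known points.

        def memory_share(point_a, point_b, alpha):
            """Convex combination of two achievable (M, R) points, alpha in [0, 1]."""
            (m_a, r_a), (m_b, r_b) = point_a, point_b
            return (alpha * m_a + (1 - alpha) * m_b,
                    alpha * r_a + (1 - alpha) * r_b)

        # Placeholder achievable points, for illustration only.
        scheme_1 = (0.5, 2.0)   # (M, R) of one known scheme
        scheme_2 = (1.0, 1.5)   # (M, R) of another known scheme
        for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(memory_share(scheme_1, scheme_2, alpha))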

    Computation Scheduling for Distributed Machine Learning with Straggling Workers

    We study the scheduling of computation tasks across n workers in a large-scale distributed learning problem with the help of a master. Computation and communication delays are assumed to be random, and redundant computations are assigned to workers in order to tolerate stragglers. We consider sequential computation of the tasks assigned to a worker, where the result of each computation is sent to the master right after its completion. Each computation round, which can model an iteration of the stochastic gradient descent (SGD) algorithm, is completed once the master receives k distinct computations, referred to as the computation target. Our goal is to characterize the average completion time as a function of the computation load, which denotes the portion of the dataset available at each worker, and the computation target. We propose two computation scheduling schemes that specify the tasks assigned to each worker, as well as their computation schedule, i.e., the order of execution. Assuming a general statistical model for computation and communication delays, we derive the average completion time of the proposed schemes. We also establish a lower bound on the minimum average completion time by assuming prior knowledge of the random delays. Experimental results on an Amazon EC2 cluster show a significant reduction in the average completion time over existing coded and uncoded computing schemes. It is also shown numerically that the gap between the proposed scheme and the lower bound is relatively small, confirming the efficiency of the proposed scheduling design. (Submitted for publication.)
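
    The completion-time quantity described above is straightforward to estimate by simulation. The sketch below is illustrative only and does not implement the paper's two scheduling schemes: it assumes a cyclic redundant task assignment, sequential execution, and i.i.d. exponential computation and communication delays, and estimates the average time until the master has received k distinct task results.

        import random

        def average_completion_time(n=10, num_tasks=10, load=3, k=8,
                                    comp_rate=1.0, comm_rate=5.0, trials=2000):
            """Average time until the master holds k distinct task results.

            Illustrative assumptions: each of n workers is assigned `load` of the
            num_tasks tasks cyclically, runs them sequentially, and sends every
            result as soon as it finishes; per-task computation and per-message
            communication delays are i.i.d. exponential with the given rates.
            """
            total = 0.0
            for _ in range(trials):
                earliest = {}  # task id -> earliest time its result reaches the master
                for w in range(n):
                    t = 0.0
                    for j in range(load):
                        task = (w + j) % num_tasks                   # cyclic (redundant) assignment
                        t += random.expovariate(comp_rate)           # sequential computation
                        arrival = t + random.expovariate(comm_rate)  # sent right after completion
                        earliest[task] = min(earliest.get(task, float("inf")), arrival)
                # the round ends when the k-th distinct result reaches the master
                total += sorted(earliest.values())[k - 1]
            return total / trials

        print(average_completion_time())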

    Probabilistic Load Flow based on Parameterized Probability-boxes for Systems with Insufficient Information

    The increased penetration of intermittent renewable energy sources and random loads has introduced many uncertainties into the power system, and it is essential to analyze the effect of these uncertain factors on its behavior. This study presents a powerful approach based on probability-boxes (p-boxes), which combine interval and probabilistic descriptions of uncertainty and are well suited to problems with insufficient information. The uncertainty of the distribution functions is modeled according to the influence of natural factors such as light intensity and wind speed. First, the p-box load flow problem is studied using an appropriate point estimation method to calculate the statistical moments of the probabilistic load flow (PLF) outputs. Then, the Cornish–Fisher expansion series is used to obtain the probability bounds. The proposed approach is evaluated on the IEEE 14-bus and IEEE 118-bus test systems, which include loads, solar farms, and wind farms as p-box input variables. The obtained results are compared with the double-loop sampling (DLS) approach to show the proposed method's precision and efficiency. ©2021 The Authors. Published by IEEE under a Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/). This work has been funded by the Academy of Finland (Grant Number: Profi4/WP2).
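
    For context on the DLS baseline mentioned above, the sketch below shows double-loop sampling over a very simple p-box; the Gaussian-with-interval-mean input model and the one-line load_flow function are illustrative assumptions, not the paper's IEEE test-system setup.

        import random

        def dls_bounds(load_flow, mean_interval, sigma, threshold,
                       outer=200, inner=500):
            """Double-loop sampling bounds on P(load_flow(x) <= threshold).

            Illustrative p-box: the uncertain input x is Gaussian with known
            standard deviation sigma but a mean known only to lie in mean_interval.
            The outer loop samples a candidate distribution from the p-box; the
            inner loop is plain Monte Carlo under that distribution.
            """
            p_low, p_high = 1.0, 0.0
            for _ in range(outer):
                mu = random.uniform(*mean_interval)          # pick one distribution in the p-box
                hits = sum(load_flow(random.gauss(mu, sigma)) <= threshold
                           for _ in range(inner))            # plain Monte Carlo under that choice
                p = hits / inner
                p_low, p_high = min(p_low, p), max(p_high, p)
            return p_low, p_high

        # Hypothetical one-line "load flow": per-unit voltage drops with the load.
        print(dls_bounds(lambda load: 1.0 - 0.02 * load,
                         mean_interval=(8.0, 12.0), sigma=1.0, threshold=0.85))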